Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated ratings may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the Vascular Lesions Detection and Segmentation ("Where is VALDO?") challenge, which was run as a satellite event of the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while utilizing weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Data from multiple cohorts were used for both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - Microbleeds, and no practically useful results yet for Task 3 - Lacunes. The challenge also highlighted performance inconsistencies across cases that may deter use at the individual level, while the methods still prove useful at the population level.
The United States coastline spans 95,471 miles, a distance that cannot be effectively patrolled or secured by manual human effort alone. Unmanned Aerial Vehicles (UAVs) equipped with infrared cameras and deep-learning-based algorithms represent a more efficient alternative for identifying and segmenting objects of interest - namely, ships. However, standard approaches to training these algorithms require large-scale datasets of densely labeled infrared maritime images. Such datasets are not publicly available, and manually annotating every pixel in a large-scale dataset would carry an extreme labor cost. In this work we demonstrate that, in the context of segmenting ships in infrared imagery, weakly supervising an algorithm with sparsely labeled data can drastically reduce data labeling costs with minimal impact on system performance. We apply weakly-supervised learning to an unlabeled dataset of 7055 infrared images sourced from the Naval Air Warfare Center Aircraft Division (NAWCAD). We find that by sparsely labeling only 32 points per image, weakly-supervised segmentation models can still effectively detect and segment ships, achieving a Jaccard score of up to 0.756.
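The core mechanism here is simple enough to show directly. Below is a minimal PyTorch sketch (not the authors' code; the network output, point coordinates, and class labels are illustrative) of a partial cross-entropy loss that only penalizes the handful of clicked pixels and ignores everything else.

```python
# Hypothetical sketch: training a segmentation model with sparse point labels.
# Pixels without a label are marked with IGNORE so they contribute no loss,
# which is the core idea behind weak supervision from a handful of clicked points.
import torch
import torch.nn as nn

IGNORE = 255  # value assigned to unlabeled pixels

# Partial cross-entropy: identical to dense CE, but unlabeled pixels are skipped.
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE)

def sparse_point_mask(height, width, points, labels):
    """Build an (H, W) target where only the clicked points carry labels."""
    target = torch.full((height, width), IGNORE, dtype=torch.long)
    for (y, x), cls in zip(points, labels):
        target[y, x] = cls  # 0 = background, 1 = ship
    return target

# Toy example: 2-class logits from any segmentation network, 32 labeled points.
logits = torch.randn(1, 2, 128, 128, requires_grad=True)
points = [(torch.randint(0, 128, (1,)).item(),
           torch.randint(0, 128, (1,)).item()) for _ in range(32)]
labels = [i % 2 for i in range(32)]
target = sparse_point_mask(128, 128, points, labels).unsqueeze(0)

loss = criterion(logits, target)  # gradient flows only through labeled pixels
loss.backward()
```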
Scene text images have different shapes and are subject to various distortions, e.g., perspective distortions. To handle these challenges, state-of-the-art methods rely on a rectification network, which is connected to the text recognition network. They form a linear pipeline that applies text rectification to all input images, even images that can be recognized without it. Undoubtedly, the rectification network improves overall text recognition performance. However, in some cases the rectification network introduces unnecessary distortions, resulting in incorrect predictions for images that would otherwise have been recognized correctly. To alleviate these unnecessary distortions, the portmanteauing of features is proposed. The portmanteau feature, inspired by the portmanteau word, is a feature containing information from both the original text image and the rectified image. To generate the portmanteau feature, a non-linear input pipeline with a block matrix initialization is presented. In this work, the transformer is chosen as the recognition network due to its use of attention and inherent parallelism, which can effectively handle the portmanteau feature. The proposed method is examined on 6 benchmarks and compared with 13 state-of-the-art methods. The experimental results show that the proposed method outperforms the state-of-the-art methods on several of the benchmarks.
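As one way to picture the idea, the following PyTorch sketch combines features from the original and rectified branches into a single portmanteau feature, with the projection weight initialized as a block matrix so the fused feature starts as a simple average of the two branches. This is an illustrative reading of the abstract, not the paper's implementation; the module name, feature dimension, and activation are assumptions.

```python
# Hedged sketch of a "portmanteau" input feature: the original and rectified
# image features are concatenated and projected with a weight initialized as a
# block matrix, so the model starts from a simple average of both branches.
import torch
import torch.nn as nn

class PortmanteauFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim, bias=False)
        # Block-matrix initialization: [0.5*I | 0.5*I] averages the branches at step 0.
        eye = torch.eye(dim)
        with torch.no_grad():
            self.proj.weight.copy_(0.5 * torch.cat([eye, eye], dim=1))
        self.act = nn.GELU()

    def forward(self, feat_original, feat_rectified):
        fused = torch.cat([feat_original, feat_rectified], dim=-1)
        return self.act(self.proj(fused))

# Toy usage with per-token features from the two image branches.
fusion = PortmanteauFusion(dim=256)
tokens_orig = torch.randn(4, 100, 256)   # batch, sequence, channels
tokens_rect = torch.randn(4, 100, 256)
portmanteau = fusion(tokens_orig, tokens_rect)  # (4, 100, 256)
```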
Scene text recognition (STR) involves the task of reading text in cropped images of natural scenes. Conventional STR models employ a convolutional neural network (CNN) followed by a recurrent neural network in an encoder-decoder framework. More recently, the transformer architecture has been widely adopted in STR, as it shows strong capability in capturing the long-term dependencies that are prominent in scene text images. Many researchers utilize the transformer as part of a hybrid CNN-transformer encoder, often followed by a transformer decoder. However, such methods only make use of long-term dependencies midway through the encoding process. Although the vision transformer (ViT) is able to capture such dependencies at an early stage, its utilization remains largely unexploited in STR. This work proposes a transformer-only model as a simple baseline that outperforms hybrid CNN-transformer models. Furthermore, two key areas for improvement were identified. First, the first decoded character has the lowest prediction accuracy. Second, images of different original aspect ratios react differently to the patch resolution, whereas ViT employs only one fixed patch resolution. To explore these areas, Pure Transformer with Integrated Experts (PTIE) is proposed. PTIE is a transformer model that can process multiple patch resolutions and decode in both the original and reverse character orders. It is examined on 7 commonly used benchmarks and compared with over 20 state-of-the-art methods. The experimental results show that the proposed method outperforms them and obtains state-of-the-art results on most benchmarks.
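The following sketch (illustrative only, not the PTIE code; patch sizes, embedding dimension, and image size are assumptions) shows how the same cropped text image can be tokenized at two different patch resolutions before being fed to a shared transformer.

```python
# Illustrative sketch of embedding the same text image with two different patch
# resolutions, the kind of multi-resolution tokenization the paper argues helps
# images of different original aspect ratios.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, patch_h, patch_w, in_ch=3, dim=256):
        super().__init__()
        # A strided convolution is the standard ViT patchify operation.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=(patch_h, patch_w),
                              stride=(patch_h, patch_w))

    def forward(self, x):
        x = self.proj(x)                     # (B, dim, H/ph, W/pw)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)

image = torch.randn(1, 3, 32, 128)           # a typical cropped scene-text image
coarse = PatchEmbed(8, 4)(image)             # (1, 128, 256) tokens
fine = PatchEmbed(4, 4)(image)               # (1, 256, 256) tokens
# A shared transformer encoder/decoder can then process either token sequence,
# and decoding can run in the original or the reversed character order.
print(coarse.shape, fine.shape)
```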
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B, instruction-finetuned on 1.8K tasks, outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
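Since the Flan-T5 checkpoints are public, a minimal usage sketch via the Hugging Face Transformers library looks as follows; the checkpoint name and prompt are illustrative, and larger variants follow the same interface.

```python
# Minimal usage sketch for the released Flan-T5 checkpoints with Hugging Face
# Transformers; the prompt is an arbitrary zero-shot instruction.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # -large, -xl, and -xxl variants also exist
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Instruction-finetuned models respond to natural-language instructions zero-shot.
prompt = "Answer the following question. What is the boiling point of water in Celsius?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```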
Generative models such as DALL-E 2 could represent promising future tools for image generation, augmentation, and manipulation in artificial intelligence research in radiology, provided that these models have sufficient medical domain knowledge. Here, we show that DALL-E 2 has learned relevant representations of X-ray images, with promising capabilities in terms of zero-shot text-to-image generation, continuing an image beyond its original boundaries, and removing elements, although the generation of pathology and of CT, MRI, and ultrasound images remains limited. The use of generative models for augmenting and generating radiological data thus appears feasible, even if further fine-tuning and adaptation of these models is required beforehand.
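For orientation, the sketch below shows how such a model can be probed programmatically, assuming the OpenAI Python client (v1.x) and its images endpoints; the file paths, prompts, and mask convention are placeholders rather than the study's actual protocol.

```python
# Hedged sketch of probing a text-to-image model through the OpenAI Python
# client (v1.x); paths and prompts are placeholders, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Zero-shot text-to-image generation of a radiology-style image.
generated = client.images.generate(
    model="dall-e-2",
    prompt="A plain frontal chest X-ray",
    size="1024x1024",
    n=1,
)
print(generated.data[0].url)

# Image editing: regions made transparent in the mask are regenerated, which is
# how elements can be removed or an image continued beyond its original borders.
edited = client.images.edit(
    image=open("xray_padded.png", "rb"),
    mask=open("mask_transparent_border.png", "rb"),
    prompt="A plain frontal chest X-ray",
    n=1,
    size="1024x1024",
)
print(edited.data[0].url)
```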
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR) compared to current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% compared to current maximum likelihood techniques. When run on a GPU, the GNN is capable of processing IceCube events at nearly the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
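As a rough illustration of the point-cloud-plus-GNN idea (not the IceCube implementation; the feature layout, graph construction, and architecture are assumptions), the sketch below builds a k-nearest-neighbor graph over sensor hits and classifies the whole event with a small graph network using PyTorch Geometric.

```python
# Illustrative sketch: an event as a point cloud of sensor hits turned into a
# k-nearest-neighbor graph and classified with a small GNN.
# Feature layout (x, y, z, time, charge) and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import knn_graph, GCNConv, global_mean_pool  # needs torch-cluster

class EventClassifier(torch.nn.Module):
    def __init__(self, in_dim=5, hidden=64, n_classes=3):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)      # one embedding per event
        return self.head(h)                 # e.g. signal vs. background classes

# Toy event: 40 pulses, each with position, time, and charge.
hits = torch.randn(40, 5)
edge_index = knn_graph(hits[:, :3], k=8)    # connect spatially nearby sensors
event = Data(x=hits, edge_index=edge_index)
batch = torch.zeros(40, dtype=torch.long)   # a single event in the batch

logits = EventClassifier()(event.x, event.edge_index, batch)
```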
Clinical investigations of how anatomical structures change over time could greatly benefit from population-level quantification of shape, i.e., spatiotemporal statistical shape modeling (SSM). Such a tool relates patient organ cycles or disease progression to a population of interest. Constructing a shape model requires establishing a quantitative shape representation (e.g., corresponding landmarks). Particle-based shape modeling (PSM) is a data-driven SSM approach that captures population-level shape variation by optimizing landmark placement. However, it assumes a cross-sectional study design and therefore has limited statistical power for representing shape changes over time. Existing methods for modeling spatiotemporal or longitudinal shape changes require a predefined shape atlas and a pre-built shape model that is typically constructed cross-sectionally. This paper proposes a data-driven approach, inspired by the PSM method, to learn population-level spatiotemporal shape changes directly from shape data. We introduce a novel SSM optimization scheme that produces landmarks in correspondence both across the population (inter-subject) and across time series (intra-subject). We apply the proposed method to 4D cardiac data from atrial fibrillation patients and demonstrate its efficacy in representing the dynamic changes of the left atrium. Furthermore, we show that our method outperforms an image-based approach to spatiotemporal SSM in terms of a generative time-series model, the Linear Dynamical System (LDS). An LDS fit using the spatiotemporal shape model optimized by our method provides better generalization and specificity, indicating that it accurately captures the underlying temporal dependencies.
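For readers unfamiliar with how shape models are scored, the following generic sketch (not the paper's pipeline; the data are synthetic and the landmark count is arbitrary) shows the standard ingredients: once correspondences exist, a linear shape model is a PCA over flattened landmark coordinates, and generalization can be probed by reconstructing held-out shapes from a few modes.

```python
# Generic illustration of statistical shape model evaluation on synthetic data:
# PCA over corresponding landmarks, with leave-one-out reconstruction error as a
# simple proxy for the model's generalization ability.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_landmarks = 20, 5, 128
# Shape matrix: one row per (subject, timepoint); columns are x, y, z of landmarks.
shapes = rng.normal(size=(n_subjects * n_timepoints, n_landmarks * 3))

def generalization_error(shapes, n_modes):
    """Mean leave-one-out reconstruction error using the first n_modes."""
    errors = []
    for i in range(len(shapes)):
        train = np.delete(shapes, i, axis=0)
        pca = PCA(n_components=n_modes).fit(train)
        recon = pca.inverse_transform(pca.transform(shapes[i:i + 1]))
        errors.append(np.linalg.norm(recon - shapes[i]))
    return float(np.mean(errors))

print(generalization_error(shapes, n_modes=4))
```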
Traditional process mining techniques take event data as input in which each event is associated with exactly one object, the object representing an instantiation of the process. Object-centric event data contain events that are associated with multiple objects and thus express the interaction of multiple processes. Since traditional process mining techniques assume events associated with a single object, they cannot be applied to object-centric event data. To use traditional process mining techniques, object-centric event data are flattened by removing the object references. Flattening is lossy, leading to inaccurate features being extracted from the flattened data; moreover, the graph structure of the object-centric event data is lost in the process. In this paper, we introduce a general framework for extracting and encoding features from object-centric event data. We compute features natively on the object-centric event data, leading to accurate measures. Furthermore, we provide three encodings for these features: tabular, sequential, and graph-based. While tabular and sequential encodings have been heavily used in process mining, the graph-based encoding is a new technique that preserves the structure of object-centric event data. We provide six use cases: a visualization and a prediction use case for each of the three encodings. In the prediction use cases, we use explainable AI to show the utility of the object-centric features and of the structure of the sequential and graph-based encodings for predictive models.
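A toy sketch of the graph-based idea (not the paper's framework; the event table and features are invented for illustration): events become nodes, two events are linked when they share an object, and per-event features can be computed natively without flattening.

```python
# Toy graph-based encoding of object-centric event data: events are nodes, and
# two events are connected when they share an object, preserving the structure
# that is lost when the data are flattened.
from itertools import combinations
import networkx as nx

# Minimal object-centric event table: (event id, activity, related objects).
events = [
    ("e1", "create order", {"o1"}),
    ("e2", "pick item",    {"o1", "i1"}),
    ("e3", "pick item",    {"o1", "i2"}),
    ("e4", "ship items",   {"i1", "i2"}),
]

graph = nx.Graph()
for eid, activity, objs in events:
    # Node-level feature computed natively: how many objects the event touches.
    graph.add_node(eid, activity=activity, n_objects=len(objs))

for (e1, _, o1), (e2, _, o2) in combinations(events, 2):
    if o1 & o2:                      # shared object => interaction between events
        graph.add_edge(e1, e2)

print(graph.nodes(data=True))
print(list(graph.edges()))           # e4 links to e2/e3 via items, not the order
```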
The execution of processes leaves traces of event data in information systems. These event data can be analyzed with process mining techniques. For traditional process mining techniques, each event has to be associated with exactly one object, e.g., a company's customer. The events associated with one object form an event sequence called a case, and a case describes an end-to-end run through a process. The cases contained in event data can be used to discover process models, detect frequent bottlenecks, or learn predictive models. However, events encountered in real-life information systems, e.g., ERP systems, can often be associated with multiple objects. The traditional sequential case concept falls short for such object-centric event data, since these data exhibit a graph structure. One could force object-centric event data into the traditional case concept by flattening them, but flattening manipulates the data and removes information. Therefore, a concept analogous to the case concept of traditional event logs is necessary to enable the application of different process mining tasks to object-centric event data. In this paper, we introduce the case concept for object-centric process mining: process executions. These are graph-based generalizations of cases as considered in traditional process mining. Furthermore, we provide techniques to extract process executions. Based on these executions, we determine equivalent process behavior with respect to an attribute using graph isomorphism. Process executions that are equivalent with respect to the events' activities are object-centric variants, i.e., a generalization of variants in traditional process mining. We provide a visualization technique for object-centric variants. The scalability and efficiency of the contributions are extensively evaluated. Furthermore, we provide a case study showing the most frequent object-centric variants observed in real-life data.
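One natural extraction technique can be sketched in a few lines (a hedged illustration, not necessarily the paper's exact definition): events that transitively share objects form one process execution, i.e., a connected component of the event-object interaction graph.

```python
# Hedged sketch: extract process executions as connected components of the graph
# in which events are linked whenever they share an object. The toy log and the
# resulting component boundaries are illustrative only.
from itertools import combinations
import networkx as nx

events = [
    ("e1", "create order", {"o1"}),
    ("e2", "pack order",   {"o1", "p1"}),
    ("e3", "ship package", {"p1"}),
    ("e4", "create order", {"o2"}),     # an unrelated second execution
    ("e5", "cancel order", {"o2"}),
]

graph = nx.Graph()
graph.add_nodes_from(eid for eid, _, _ in events)
for (a, _, objs_a), (b, _, objs_b) in combinations(events, 2):
    if objs_a & objs_b:
        graph.add_edge(a, b)

executions = [sorted(component) for component in nx.connected_components(graph)]
print(executions)   # two executions: {e1, e2, e3} and {e4, e5}
```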